7 research outputs found

    Mixture of beamformers for speech separation and extraction

    In many audio applications, the signal of interest is corrupted by acoustic background noise, interference, and reverberation. These contaminations can significantly degrade the quality and intelligibility of the audio signal, which makes it important to develop signal processing methods that can separate the competing sources and extract a source of interest. The estimated signals may then be directly listened to, transmitted, or further processed, giving rise to a wide range of applications such as hearing aids, noise-cancelling headphones, human-computer interaction, surveillance, and hands-free telephony. Many existing approaches to speech separation/extraction rely on beamforming techniques. These techniques approach the problem from a spatial point of view: a microphone array is used to form a spatial filter which can extract a signal from a specific direction and reduce the contamination of signals from other directions. However, when there are fewer microphones than sources (the underdetermined case), perfect attenuation of all interferers is impossible, and only partial interference suppression can be achieved. In this thesis, we present a framework which extends the use of beamforming techniques to underdetermined speech mixtures. We describe frequency-domain non-linear mixtures of beamformers that can extract a speech source from a known direction. Our approach models the data in each frequency bin via Gaussian mixture distributions, which can be learned using the expectation-maximization algorithm. The model learning is performed using the observed mixture signals only, and no prior training is required. The signal estimator comprises a set of minimum mean square error (MMSE), minimum variance distortionless response (MVDR), or minimum power distortionless response (MPDR) beamformers.
To estimate the signal, all beamformers are concurrently applied to the observed signal, and the weighted sum of the beamformers' outputs is used as the signal estimator, where the weights are the estimated posterior probabilities of the Gaussian mixture states. These weights are specific to each time-frequency point. The resulting non-linear beamformers do not need to know or estimate the number of sources, and can be applied to microphone arrays with two or more microphones in arbitrary array configurations. We test and evaluate the described methods on underdetermined speech mixtures. Experimental results for the non-linear beamformers in underdetermined mixtures with room reverberation confirm their capability to successfully extract speech sources.
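The estimator described above can be sketched in a few lines of NumPy. This is a hedged illustration only, not the thesis's implementation: the steering vector `d`, the per-state covariances `R` (which the thesis learns via EM), and all dimensions are placeholder assumptions. It shows one MVDR beamformer per mixture state, Gaussian posteriors per time frame in a single frequency bin, and the posterior-weighted sum of beamformer outputs.

```python
import numpy as np

rng = np.random.default_rng(0)

M, K, T = 4, 3, 100              # mics, mixture states, time frames (one frequency bin)
d = np.ones(M, dtype=complex)    # assumed steering vector toward the known target direction

# Hypothetical per-state spatial covariances and priors, standing in for EM-learned values
R = np.stack([np.eye(M) + 0.5 * np.outer(v, v.conj())
              for v in rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))])
pi = np.full(K, 1.0 / K)

# Observed STFT frames in this frequency bin (synthetic stand-in data)
X = rng.standard_normal((T, M)) + 1j * rng.standard_normal((T, M))

# One MVDR beamformer per mixture state: w_k = R_k^{-1} d / (d^H R_k^{-1} d)
Rinv = np.linalg.inv(R)
w = np.einsum('kij,j->ki', Rinv, d)
w /= np.einsum('i,ki->k', d.conj(), w)[:, None]

# Posterior state probabilities under zero-mean complex Gaussians, per time frame
def log_gauss(x, Rk):
    sign, logdet = np.linalg.slogdet(Rk)
    return -logdet - np.real(np.einsum('ti,ij,tj->t', x.conj(), np.linalg.inv(Rk), x))

logp = np.stack([np.log(pi[k]) + log_gauss(X, R[k]) for k in range(K)], axis=1)
post = np.exp(logp - logp.max(axis=1, keepdims=True))
post /= post.sum(axis=1, keepdims=True)      # shape (T, K): weights per time-frequency point

# Signal estimate: posterior-weighted sum of the K beamformer outputs w_k^H x
y = np.einsum('tk,ki,ti->t', post, w.conj(), X)
```

Note that each `w_k` satisfies the distortionless constraint `w_k^H d = 1`, so the mixture output is a convex combination of distortionless estimates at every frame.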

    An Automated Platform for Gathering and Managing Open-Source Cyber Threat Intelligence

    The community has begun paying more attention to open-source cyber threat intelligence (OSCTI) to stay informed about the rapidly changing cyber threat landscape. Numerous OSCTI reports frequently provide information about threats. However, despite the urgent need for high-quality OSCTI, existing gathering and management tools have concentrated mainly on isolated, low-level indicators of compromise. Higher-level concepts (such as adversary tactics, techniques, and procedures) and the connections between them, which capture crucial information about threat behaviors and are essential to revealing the full threat picture, have been disregarded. We therefore present SecurityKG, an automated OSCTI collection and management system. To address this gap, SecurityKG collects OSCTI reports from a wide variety of sources and uses a combination of AI and NLP approaches to extract high-fidelity knowledge about threat behaviors, from which it constructs a security knowledge graph. To facilitate knowledge graph exploration, SecurityKG provides a user interface (UI) that supports multiple forms of interactivity.
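The knowledge-graph idea behind such a system can be illustrated with a minimal sketch. The entities, relation names, and triples below are hypothetical examples, not SecurityKG's actual schema or output: the point is only that extracted threat behaviors form subject-relation-object triples that can be traversed to recover multi-step behavior chains.

```python
from collections import defaultdict

# Hypothetical triples of the kind an OSCTI extraction pipeline might produce
triples = [
    ("APT-X", "uses-technique", "spearphishing"),
    ("spearphishing", "delivers", "malicious-macro"),
    ("malicious-macro", "executes", "powershell-downloader"),
    ("APT-X", "uses-technique", "credential-dumping"),
]

# Adjacency-list view of the knowledge graph: subject -> [(relation, object), ...]
graph = defaultdict(list)
for subj, rel, obj in triples:
    graph[subj].append((rel, obj))

def behaviors(entity, depth=3):
    """Breadth-first traversal collecting the behavior chain rooted at an entity."""
    found, frontier = [], [entity]
    for _ in range(depth):
        nxt = []
        for node in frontier:
            for rel, obj in graph[node]:
                found.append((node, rel, obj))
                nxt.append(obj)
        frontier = nxt
    return found
```

Traversing from an actor node then surfaces the linked higher-level behaviors (technique, delivery, execution) rather than a flat list of indicators.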

    Effectiveness of Anodal otDCS Following with Anodal tDCS Rather than tDCS Alone for Increasing of Relative Power of Intrinsic Matched EEG Bands in Rat Brains

    Background: This study sought to determine whether (1) there is evidence of interactions between anodal tDCS and oscillating tDCS (otDCS) stimulation patterns that increase the power of endogenous brain oscillations, and (2) matching the applied anodal otDCS frequency to the brain's dominant intrinsic frequency influences power shifts during stimulation sessions combining anodal tDCS and anodal otDCS. Method: Rats received different anodal tDCS and otDCS stimulation patterns using 8.5 Hz and 13 Hz state-related dominant intrinsic frequencies for the anodal otDCS. The rats were divided into groups with specific stimulation patterns: group A: tDCS–otDCS (8.5 Hz)–otDCS (13 Hz); group B: otDCS (8.5 Hz)–tDCS–otDCS (13 Hz); group C: otDCS (13 Hz)–tDCS–otDCS (8.5 Hz). Acute relative power changes (i.e., following 10 min stimulation sessions) in six frequency bands—delta (1.5–4 Hz), theta (4–7 Hz), alpha-1 (7–10 Hz), alpha-2 (10–12 Hz), beta-1 (12–15 Hz) and beta-2 (15–20 Hz)—were compared using a three-factor repeated-measures ANOVA. Results: For each stimulation, tDCS increased power in the theta band and in the alpha and beta bands above it, while a drop in delta power was observed. Anodal otDCS mildly increased power in both the matched intrinsic band and the delta band. In the group pattern stimulations, the power of endogenous frequencies matching the exogenous otDCS frequencies—8.5 Hz or 13 Hz—increased, with more potent effects in the upper bands. The effect was markedly stronger with the otDCS–tDCS stimulation pattern than with the tDCS–otDCS pattern. Significance: The findings suggest that the otDCS–tDCS pattern stimulation increased power in the matched intrinsic oscillations and, significantly, in the bands above them in ascending order. We provide evidence for successful cooperation between otDCS (as frequency-matched guidance) and tDCS (as a power generator), rather than tDCS alone, when stimulating a desired intrinsic brain band (herein, tES specificity).
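The relative band-power measure compared in the abstract above can be sketched as follows. This is a generic illustration, not the study's analysis pipeline: the sampling rate, the synthetic 8.5 Hz test signal, and the Welch parameters are assumptions; only the six band definitions come from the abstract. Relative power is each band's power divided by the total power over 1.5–20 Hz.

```python
import numpy as np
from scipy.signal import welch

fs = 250                                  # assumed EEG sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)
# Synthetic epoch with a dominant 8.5 Hz component, standing in for a rat EEG recording
x = np.sin(2 * np.pi * 8.5 * t) + 0.2 * np.random.default_rng(0).standard_normal(t.size)

# The six bands from the study (half-open intervals so they partition 1.5-20 Hz)
bands = {"delta": (1.5, 4), "theta": (4, 7), "alpha1": (7, 10),
         "alpha2": (10, 12), "beta1": (12, 15), "beta2": (15, 20)}

f, psd = welch(x, fs=fs, nperseg=2 * fs)  # 0.5 Hz frequency resolution
df = f[1] - f[0]

def band_power(lo, hi):
    m = (f >= lo) & (f < hi)
    return psd[m].sum() * df              # rectangle-rule integral of the PSD

total = band_power(1.5, 20)
rel = {name: band_power(lo, hi) / total for name, (lo, hi) in bands.items()}
```

For this synthetic input, the 8.5 Hz component places almost all of the relative power in the alpha-1 (7–10 Hz) band, mirroring the "matched intrinsic band" quantity the study tracks.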